13 - Deep Learning - Activations, Convolutions, and Pooling Part 1 [ID:14344]

Welcome back to deep learning. In today's lecture we want to talk about activations and convolutional neural networks. We will split this up into several videos, and the first one will be about activation functions. Later we will talk about convolutional neural networks, convolution layers, pooling, and so on.

So let's start with activation functions, and you can see

that the activation functions go back to the biological motivation and we

remember that everything we've been doing so far we somehow also motivated with the biological configuration. We see that these neurons are connected with synapses to other neurons, and this way they can actually communicate with each other. The axons have this myelin sheath, which electrically insulates them, and this way the cells are able to communicate with each other. Now, when they are communicating, they are not just passing on all the information that comes in; they have a selective mechanism. So if you have some stimuli, it does not suffice to have just any signal; the total signal must be above some threshold. What will then happen is that an action potential is triggered; the cell then repolarizes and returns to the resting state. Interestingly, it doesn't matter how strongly the cell is activated: it always produces the same action potential and then returns to its resting state. The actual biological

activation is even more complicated. So you have the different axons, and they are connected to the synapses of other neurons; along the path they are covered by Schwann cells, which form the myelin sheath and help deliver the action potential towards the next synapse. There are ion channels that are used to stabilize the entire electrical process and bring the whole system back into equilibrium after the activation pulse. So what we can see is that the knowledge essentially lies

in the connections between the neurons. We have both inhibitory and excitatory

connections. The synapses anatomically enforce feed-forward processing, so it's very similar to what we've seen so far. However, those connections can go in any direction, so there can also be cycles, and you have entire networks of neurons that are connected via different axons in order to form different cognitive functions. What is crucial is the sum of the activations: only if the sum of the activations is above the threshold will you actually end up with an activation.
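Put as a worked equation (a standard threshold-neuron formulation; the notation is mine, not from the lecture): with inputs $x_i$, weights $w_i$, and threshold $\theta$, the neuron fires only when the weighted sum crosses the threshold:

\[
y = \begin{cases} 1 & \text{if } \sum_i w_i x_i \geq \theta \\ 0 & \text{otherwise} \end{cases}
\]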

These activations are electric spikes with a specified intensity, and, to be honest, the whole system is also time-dependent and encodes information over time. So it's not just a single event that passes through; the whole process runs at a certain frequency, and this enables the entire processing over time. But it's all going to happen; I mean, we are going to get to human-level intelligence.

Now, the activations in artificial neural networks so far were nonlinear activation functions, mainly motivated by universal function approximation. So if we don't have the nonlinearities, we can't get a powerful network.
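For reference, this motivation is usually stated as the universal approximation theorem (Cybenko/Hornik; the notation is added here, not from the slides): a single hidden layer with a suitable nonlinear activation $\sigma$ and enough units $N$ can approximate any continuous function $f$ on a compact set to any accuracy $\varepsilon > 0$:

\[
\left| f(\mathbf{x}) - \sum_{i=1}^{N} \alpha_i \, \sigma\!\left(\mathbf{w}_i^{\top}\mathbf{x} + b_i\right) \right| < \varepsilon
\]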

Without the nonlinearities, we would just compute matrix multiplication after matrix multiplication, which collapses into a single linear transform. Of course, we are building on all these great abstractions that people have invented over the millennia, such as matrix multiplication.
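A minimal sketch of that collapse (a hypothetical NumPy example, not code from the lecture): stacking two linear layers without an activation in between is exactly equivalent to one linear layer.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)         # input vector
W1 = rng.normal(size=(4, 3))   # first linear layer (no activation)
W2 = rng.normal(size=(2, 4))   # second linear layer (no activation)

# Applying the two layers in sequence ...
y_stacked = W2 @ (W1 @ x)

# ... is identical to applying one precomputed matrix.
W_combined = W2 @ W1
y_single = W_combined @ x

print(np.allclose(y_stacked, y_single))  # True: the extra layer added no power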

So compared to biology, we have the sign function, which can model the all-or-nothing response, but generally our activations have no time component; maybe this could be modeled by the activation strength. The sign function, of course, is mathematically problematic, because the derivative of the sign function is zero everywhere except at zero, where it is infinite.
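As a worked equation (standard definitions, added for clarity):

\[
\operatorname{sign}(x) = \begin{cases} 1 & x > 0 \\ 0 & x = 0 \\ -1 & x < 0 \end{cases}
\qquad
\frac{\mathrm{d}}{\mathrm{d}x}\operatorname{sign}(x) = 0 \quad \text{for } x \neq 0
\]

with an impulse (a Dirac delta) at $x = 0$, so the backpropagated gradient is zero almost everywhere and carries no learning signal.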

So this is absolutely not suited for backpropagation. So far we've been using …

Part of a video series:

Accessible via

Open access

Duration

00:10:01 min

Recording date

2020-04-27

Uploaded on

2020-04-28 00:36:14

Language

en-US

Deep Learning - Activations, Convolutions, and Pooling Part 1

This video presents the biological background of activation functions and the classical choices that were used for neural networks.

Video References:
Morf's Channel
Lex Fridman's Channel
Dragon Ball Scene

 

Further Reading:
A gentle Introduction to Deep Learning

Tags

introduction, artificial intelligence, deep learning, machine learning, pattern recognition, Feedforward Networks, activation functions